
    "More Deterministic" vs. "Smaller" Buechi Automata for Efficient LTL Model Checking

    The standard technique for LTL model checking (M ⊨ ¬φ) consists in translating the negation φ of the LTL specification into a Büchi automaton A_φ, and then checking whether the product M × A_φ has an empty language. Efforts to maximize the efficiency of this process have so far concentrated on translation algorithms that produce Büchi automata which are "as small as possible", under the implicit conjecture that smaller automata should yield a smaller final product. In this paper we build on a different conjecture and present an alternative approach in which we instead generate Büchi automata which are "as deterministic as possible", in the sense that we try to reduce as much as possible the presence of nondeterministic decision states in A_φ. We motivate our choice and present empirical tests to support this approach.
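
    To make the product-and-emptiness pipeline concrete, here is a minimal explicit-state sketch in Python (hypothetical encoding and names, not the paper's tool; real model checkers work symbolically):

```python
# Product of a system M with a Buechi automaton A, then an emptiness check
# that searches for a reachable accepting lasso via the classic nested DFS.

def product(m_init, m_trans, m_label, a_init, a_trans, a_acc):
    """Product states are pairs (s, q); A moves on the label of the next system state."""
    init = [(s, q) for s in m_init for q in a_init]
    def succ(state):
        s, q = state
        for s2 in m_trans[s]:
            for q2 in a_trans.get((q, m_label[s2]), ()):
                yield (s2, q2)
    def acc(state):
        return state[1] in a_acc
    return init, succ, acc

def is_empty(init, succ, acc):
    """Non-empty iff some accepting product state lies on a reachable cycle."""
    visited, flagged = set(), set()
    def inner(root, state):          # second DFS: look for a cycle back to root
        for t in succ(state):
            if t == root:
                return True
            if t not in flagged:
                flagged.add(t)
                if inner(root, t):
                    return True
        return False
    def outer(state):                # first DFS, accepting states in postorder
        visited.add(state)
        for t in succ(state):
            if t not in visited and outer(t):
                return True
        return acc(state) and inner(state, state)
    return not any(outer(s) for s in init if s not in visited)
```

    A more deterministic A_φ means fewer automaton successors per labelled system step in `product`, and hence fewer product pairs for `is_empty` to explore.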

    An abstraction refinement approach combining precise and approximated techniques

    Predicate abstraction is a powerful technique to reduce the state space of a program to a finite and affordable number of states. It produces a conservative over-approximation in which concrete states are grouped together according to a given set of predicates. A precise abstraction contains the minimal set of transitions with respect to the predicates, but is as a result computationally expensive. Most model checkers therefore approximate the abstraction, trading precision for cost, to make the computation of the abstract system affordable. However, approximation results in a higher number of refinement iterations, since it can produce more false counterexamples than its precise counterpart, and the refinement loop can become prohibitively expensive for large programs. This paper proposes a new approach that employs both precise (slow) and approximated (fast) abstraction techniques within one abstraction-refinement loop. It allows the abstraction to be computed quickly while keeping it precise enough to avoid too many refinement iterations. We implemented the new algorithm in a state-of-the-art software model checker. Our tests with various real-life benchmarks show that the new approach almost systematically outperforms both the precise and the approximated techniques.
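
    The precision/cost trade-off can be seen on a toy example. The sketch below (hypothetical system and predicates, not the paper's implementation) computes a precise abstraction by enumerating concrete witnesses, and a cheap Cartesian approximation that treats each successor predicate independently; the extra transitions in the latter are exactly the source of spurious counterexamples:

```python
# Abstract states are truth vectors over p1: x > 0 and p2: x > 2.
CONCRETE = range(0, 7)
step  = lambda x: x - 3            # concrete transition: x := x - 3
alpha = lambda x: (x > 0, x > 2)   # abstraction function

# Precise abstraction: keep an abstract transition only if some concrete
# transition witnesses it (in general, one solver query per candidate pair).
precise = {(alpha(x), alpha(step(x))) for x in CONCRETE}

# Cartesian approximation: each successor predicate is computed independently
# per abstract source state, which is cheap but admits spurious combinations.
cartesian = set()
for a in {alpha(x) for x in CONCRETE}:
    xs = [x for x in CONCRETE if alpha(x) == a]
    for b1 in {alpha(step(x))[0] for x in xs}:
        for b2 in {alpha(step(x))[1] for x in xs}:
            cartesian.add((a, (b1, b2)))

assert precise <= cartesian
print(cartesian - precise)  # spurious transitions cause extra refinements
```

    Here the approximation even admits the inconsistent abstract successor (¬(x > 0), x > 2), which the precise abstraction rules out.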

    Formalization and Validation of Safety-Critical Requirements

    The validation of requirements is a fundamental step in the development process of safety-critical systems. In safety-critical domains such as aerospace, avionics, and railways, the use of formal methods is of paramount importance both for requirements validation and for design validation. Nevertheless, while many formal techniques have been conceived and applied to the verification of designs, research on formal methods for requirements validation is not yet mature. The main obstacles are that, on the one hand, the correctness of requirements is not formally defined, and, on the other hand, the formalization and validation of requirements usually demand a strong involvement of domain experts. We report on a methodology and a series of techniques that we developed for the formalization and validation of high-level requirements for safety-critical applications. The main ingredients are a very expressive formal language and automatic satisfiability procedures. The language combines first-order, temporal, and hybrid logic. The satisfiability procedures are based on model checking and satisfiability modulo theories. We applied this technology within an industrial project to the validation of railway requirements.
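
    As a flavour of the satisfiability-based checks, the following toy sketch (assumed requirements and hypothetical predicate names, using the z3-solver Python package rather than the paper's tool chain) asks an SMT solver whether two formalized railway requirements admit a common bounded trace, i.e., whether they are mutually consistent:

```python
from z3 import Bool, Solver, Implies, Or, Not, sat

K = 5  # bound on the trace length
occupied  = [Bool(f"occupied_{i}")  for i in range(K)]
gate_down = [Bool(f"gate_down_{i}") for i in range(K)]

s = Solver()
# R1: whenever the crossing section is occupied, the gate is down.
for i in range(K):
    s.add(Implies(occupied[i], gate_down[i]))
# R2: the gate is up at least once, so that road traffic can pass.
s.add(Or([Not(gate_down[i]) for i in range(K)]))

# A satisfying assignment is a bounded trace witnessing consistency.
print("consistent" if s.check() == sat else "inconsistent up to bound K")
```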

    Tightening the contract refinements of a system architecture

    Contract-based design is an emerging paradigm for correct-by-construction hierarchical systems: components are associated with assumptions and guarantees expressed as formal properties, and the architecture is analyzed by verifying that each contract of a composite component is correctly refined by the contracts of its subcomponents. The approach is very efficient, because the overall correctness proof is decomposed into proofs local to each component. However, the process of contract specification and refinement is quite expensive, because the requirements must be formalized into formal properties and part of the complexity is delegated to the designer, who bears the burden of specifying the contracts. Typical problems include understanding which contracts are necessary, and how they can be simplified without breaking the correctness of the refinement, or of other refinements when some subcontracts are shared. In this paper, we tackle these problems by proposing a technique to understand and simplify the contract refinements of a system architecture during the contract specification and refinement process. The technique, called tightening, is based on parameter synthesis. The idea is to generate a set of parametric proof obligations, where each parameter evaluation corresponds to a variant of the original contract refinement(s), and to search for tighter variants of the contracts that still ensure the correctness of the refinement(s). We cast this approach in the OCRA framework, where contracts are expressed as LTL formulas, and we evaluate its performance and effectiveness on a number of benchmarks.
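
    For two subcomponents, one standard formulation of the refinement proof obligations (notation simplified; in OCRA these are LTL entailments, and tightening perturbs the contracts with parameters while preserving the validity of such formulas) for a contract ⟨A, G⟩ refined by ⟨A₁, G₁⟩ and ⟨A₂, G₂⟩ is:

```latex
\begin{align*}
  &\models A \land G_2 \rightarrow A_1
      && \text{component 1's assumption is discharged}\\
  &\models A \land G_1 \rightarrow A_2
      && \text{component 2's assumption is discharged}\\
  &\models A \land G_1 \land G_2 \rightarrow G
      && \text{the composed guarantees refine } G
\end{align*}
```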

    GSTE is partitioned model checking

    Verifying whether an ω-regular property is satisfied by a finite-state system is a core problem in model checking. Standard techniques build an automaton with the complementary language, compute its product with the system, and then check for emptiness. Generalized symbolic trajectory evaluation (GSTE) has recently been proposed as an alternative approach, extending the computationally efficient symbolic trajectory evaluation (STE) to general ω-regular properties. In this paper, we show that the GSTE algorithms are essentially a partitioned version of standard symbolic model-checking (SMC) algorithms, where the partitioning is driven by the property under verification. We export this technique of property-driven partitioning to SMC and show that it typically does speed up SMC algorithms.
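
    The core idea of property-driven partitioning can be sketched explicitly (hypothetical encoding; in a symbolic tool each partition would be a BDD): instead of one monolithic set of reachable states, the fixpoint keeps one set per state of the property automaton, and these partitions are often much smaller individually:

```python
def partitioned_reach(m_init, m_trans, m_label, a_init, a_trans):
    """reach[q] = system states reachable while the automaton is in state q."""
    reach = {q: set(m_init) for q in a_init}
    frontier = [(s, q) for q in a_init for s in m_init]
    while frontier:
        s, q = frontier.pop()
        for s2 in m_trans[s]:
            # the automaton moves on the label of the successor system state
            for q2 in a_trans.get((q, m_label[s2]), ()):
                if s2 not in reach.setdefault(q2, set()):
                    reach[q2].add(s2)
                    frontier.append((s2, q2))
    return reach
```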

    Loop summarization using state and transition invariants

    This paper presents algorithms for program abstraction based on the principle of loop summarization, which, unlike traditional program approximation approaches (e.g., abstract interpretation), does not employ iterative fixpoint computation, but instead computes symbolic abstract transformers with respect to a set of abstract domains. This allows for an effective exploitation of problem-specific abstract domains for summarization and, as a consequence, the precision of an abstract model may be tailored to specific verification needs. Furthermore, we extend the concept of loop summarization to incorporate relational abstract domains, enabling the discovery of transition invariants, which are subsequently used to prove termination of programs. Well-foundedness of the discovered transition invariants is ensured either by a separate decision-procedure call or by using abstract domains that are well-founded by construction. We experimentally evaluate several abstract domains related to memory operations to detect buffer-overflow problems. Our lightweight termination analysis is also demonstrated to be effective on a wide range of benchmarks, including OS device drivers.
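
    The summarization step can be illustrated on a one-line loop (assumed loop and candidate domain, sketched with the z3-solver Python package rather than the paper's tool): each candidate relation from the abstract domain is kept only if no single loop iteration can violate it, and a well-founded survivor such as x' < x witnesses termination:

```python
from z3 import Int, Solver, Not, unsat

x, x2 = Int("x"), Int("x2")    # x: value before the iteration, x2: after
guard = x > 0
body  = x2 == x - 1            # loop: while (x > 0) x = x - 1;

# Candidate relational summaries drawn from a tiny abstract domain.
candidates = {"x' < x": x2 < x, "x' <= x": x2 <= x, "x' > x": x2 > x}

for name, rel in candidates.items():
    s = Solver()
    s.add(guard, body, Not(rel))      # look for a violating iteration
    if s.check() == unsat:            # none exists: rel summarizes the body
        print("holds for one iteration:", name)

# x' < x is transitive and well-founded over the naturals, so it is a
# transition invariant proving termination of this loop.
```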

    From Informal Safety-Critical Requirements to Property-Driven Formal Validation

    Most of the efforts in formal methods have historically been devoted to comparing a design against a set of requirements. The validation of the requirements themselves, however, has often been disregarded, and it can be considered a largely open problem that poses several challenges. The first challenge is that requirements are often written in natural language and may thus contain a high degree of ambiguity. Despite the progress in Natural Language Processing techniques, the task of understanding a set of requirements cannot be automated and must be carried out by domain experts, who are typically not familiar with formal languages. Furthermore, in order to retain a direct connection with the informal requirements, the formalization cannot follow standard model-based approaches. The second challenge lies in the formal validation of the requirements. On the one hand, it is not even clear what the correctness criteria or the high-level properties are that the requirements must fulfill. On the other hand, the expressivity of the language used in the formalization may go beyond the theoretical and/or practical capacity of state-of-the-art formal verification. In order to address these issues, we propose a new methodology that comprises a chain of steps, each supported by a specific tool. The main steps are the following. First, the informal requirements are split into basic fragments, which are classified into categories, and dependency and generalization relationships among them are identified. Second, the fragments are modeled using a visual language such as UML. The UML diagrams are both syntactically restricted (in order to guarantee a formal semantics) and enriched with a highly controlled natural language (to allow for modeling static and temporal constraints). Third, an automatic formal analysis phase iterates over the modeled requirements by combining several complementary techniques: checking consistency; verifying whether the requirements entail desirable properties; verifying whether the requirements are consistent with selected scenarios; diagnosing inconsistencies by identifying inconsistent cores; identifying vacuous requirements; constructing multiple explanations by enabling fault-tree analysis for particular fault models; and verifying whether the specification is realizable.
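
    The controlled-natural-language step can be pictured as a small pattern-to-template mapping (entirely hypothetical sentences and patterns; the actual tool chain uses restricted UML diagrams plus a much richer controlled language):

```python
import re

# (pattern over lower-case sentences, LTL template) pairs
PATTERNS = [
    (r"always if (\w+) then (\w+)", r"G (\1 -> \2)"),
    (r"(\w+) never holds",          r"G !\1"),
    (r"eventually (\w+)",           r"F \1"),
    (r"(\w+) until (\w+)",          r"\1 U \2"),
]

def formalize(sentence):
    """Map a controlled-language sentence to an LTL formula, or fail loudly."""
    for cnl, ltl in PATTERNS:
        m = re.fullmatch(cnl, sentence.strip().lower())
        if m:
            return m.expand(ltl)
    raise ValueError(f"outside the controlled grammar: {sentence!r}")

print(formalize("always if brake_requested then brake_applied"))
# -> G (brake_requested -> brake_applied)
```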

    A first-order logic characterization of safety and co-safety languages

    Linear Temporal Logic (LTL) is one of the most popular temporal logics and comes into play in a variety of branches of computer science. Among the reasons for its widespread use are its strong foundational properties: LTL is equivalent to counter-free omega-automata, to star-free omega-regular expressions, and (by Kamp's theorem) to the first-order theory of one successor (S1S[FO]). Safety and co-safety languages, where a finite prefix suffices to establish whether a word does not belong or belongs to the language, respectively, play a crucial role in lowering the complexity of problems like model checking and reactive synthesis for LTL. SafetyLTL (resp., coSafetyLTL) is a fragment of LTL where only universal (resp., existential) temporal modalities are allowed, and it recognizes only safety (resp., co-safety) languages. The main contribution of this paper is the introduction of a fragment of S1S[FO], called SafetyFO, and of its dual coSafetyFO, which are expressively complete with respect to the LTL-definable safety and co-safety languages. We prove that they exactly characterize SafetyLTL and coSafetyLTL, respectively, a result that complements Kamp's theorem and provides a clearer view of the characterization of (fragments of) LTL in terms of first-order languages. In addition, it gives a direct, compact, and self-contained proof that any safety language definable in LTL is definable in SafetyLTL as well. As a by-product, we obtain some interesting results on the expressive power of the weak tomorrow operator of SafetyLTL, interpreted over finite and infinite words. Moreover, we prove that, when interpreted over finite words, SafetyLTL (resp., coSafetyLTL) devoid of the tomorrow (resp., weak tomorrow) operator captures the safety (resp., co-safety) fragment of LTL over finite words.
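
    Two illustrative instances of the correspondence (ours, not taken from the paper) show the universal/existential pattern that SafetyFO and coSafetyFO make precise, where P and Q are the first-order counterparts of the propositions p and q:

```latex
\begin{align*}
  \mathsf{G}\,p \;&\equiv\; \forall x.\, P(x)
      && \text{(safety: purely universal quantification)}\\
  p \,\mathsf{U}\, q \;&\equiv\; \exists x.\,\big(Q(x) \land \forall y.\,(y < x \rightarrow P(y))\big)
      && \text{(co-safety: an outermost existential)}
\end{align*}
```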

    Reactive Synthesis from Extended Bounded Response LTL Specifications

    Reactive synthesis is a key technique for the design of correct-by-construction systems and has been thoroughly investigated in the last decades. It consists in the synthesis of a controller that reacts to the environment's inputs while satisfying a given temporal logic specification. Common approaches are based on the explicit construction of automata and on their determinization, which limits their scalability. In this paper, we introduce a new fragment of Linear Temporal Logic, called Extended Bounded Response LTL (LTLEBR), that allows one to combine bounded and universal unbounded temporal operators (thus covering a large set of practical cases), and we show that reactive synthesis from LTLEBR specifications can be reduced to solving a safety game over a deterministic symbolic automaton built directly from the specification. We prove the correctness of the proposed approach and successfully evaluate it on various benchmarks.
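
    A safety game over a deterministic automaton can be solved with a simple greatest-fixpoint computation. The sketch below is a minimal explicit-state version (hypothetical encoding; the paper's construction is symbolic): states are removed from the candidate winning region as long as some environment input leaves the controller with no output that stays inside it:

```python
def solve_safety(inputs, outputs, delta, safe):
    """delta: total map (state, input, output) -> state; returns winning region."""
    win = set(safe)
    changed = True
    while changed:
        changed = False
        for s in list(win):
            # the controller survives input i if some output keeps it in win
            if not all(any(delta[(s, i, o)] in win for o in outputs)
                       for i in inputs):
                win.remove(s)
                changed = True
    return win  # the controller wins iff the initial state is in `win`
```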

    Model Checking at Scale: Automated Air Traffic Control Design Space Exploration

    Many possible solutions, differing in the assumptions and implementations of the components in use, are usually in competition during early design stages. Deciding which solution to adopt requires considering several trade-offs. Model checking represents a possible way of comparing such designs; however, when the number of designs is large, building and validating so many models may be intractable. During our collaboration with NASA, we faced the challenge of considering a design space with more than 20,000 designs for the NextGen air traffic control system. To deal with this problem, we introduce a compositional, modular, parameterized approach combining model checking with contract-based design to automatically generate large numbers of models from a possible set of components and their implementations. Our approach is fully automated, enabling the generation and validation of all target designs. The 1,620 designs that were most relevant to NASA were analyzed exhaustively. To deal with the massive amount of data generated, we apply novel data-analysis techniques that enable a rich comparison of the designs, including safety aspects. Our results were validated by NASA system designers and helped to identify novel as well as known problematic configurations.
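
    The parameterized generation step amounts to enumerating the Cartesian product of the implementation choices (component and implementation names below are invented for illustration; the study itself generated contract-based models for each combination):

```python
from itertools import product

# Invented component/implementation names for illustration only.
IMPLEMENTATIONS = {
    "surveillance": ["radar", "adsb", "radar+adsb"],
    "separation":   ["ground_managed", "self_managed"],
    "comm_link":    ["voice", "datalink"],
}

designs = [dict(zip(IMPLEMENTATIONS, combo))
           for combo in product(*IMPLEMENTATIONS.values())]
print(len(designs), "candidate designs")  # 3 * 2 * 2 = 12 in this toy space
for d in designs[:2]:
    print(d)  # each dictionary parameterizes one generated model
```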